Distributed feedback design and complexity-constrained control are examples of problems posed within the domain of structured optimal feedback synthesis. The optimal feedback gain is typically a non-convex function of system primitives. However, in recent years, algorithms have been proposed to obtain locally optimal solutions. In applications to large-scale distributed control, the major obstacle is computational complexity. This paper addresses complexity through a combination of linear-algebraic techniques and computational methods adapted from both machine learning and reinforcement learning. It is shown that for general classes of optimal control problems, the objective function and its gradient can be computed from data. Transformations borrowed from the theory of reinforcement learning are adapted to obtain simulation-based algorithms for computing the structured optimal H2 feedback gain. Customized proximal algorithms based on gradient descent and incremental gradient are tested in computational experiments and their relative merits are discussed.
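The model-based version of the proximal gradient iteration underlying this line of work can be sketched as follows: the H2 cost and its gradient are computed from a pair of closed-loop Lyapunov equations, and sparsity in the feedback gain is promoted by an elementwise soft-thresholding (proximal) step. This is a minimal illustration, not the paper's simulation-based algorithm; the function names, step size, and ℓ1 weight are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import solve_continuous_are, solve_continuous_lyapunov

def h2_cost_and_grad(A, B, K, Q, R, W):
    """H2 cost J(K) = trace(P W) and its gradient for the closed loop A - B K.

    L and P are the closed-loop controllability/observability Gramians."""
    Acl = A - B @ K
    L = solve_continuous_lyapunov(Acl, -W)                    # Acl L + L Acl' + W = 0
    P = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))  # Acl' P + P Acl + Q + K'RK = 0
    J = np.trace(P @ W)
    grad = 2 * (R @ K - B.T @ P) @ L
    return J, grad

def soft_threshold(K, t):
    """Proximal operator of t * ||.||_1 (elementwise shrinkage)."""
    return np.sign(K) * np.maximum(np.abs(K) - t, 0.0)

def sparse_h2_gain(A, B, Q, R, W, lam=0.1, step=1e-2, iters=500):
    """Proximal gradient descent on J(K) + lam * ||K||_1,
    initialized at the (dense) centralized LQR gain."""
    P0 = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P0)        # stabilizing starting point
    for _ in range(iters):
        _, g = h2_cost_and_grad(A, B, K, Q, R, W)
        K = soft_threshold(K - step * g, step * lam)
    return K
```

The data-driven algorithms in the paper replace the Lyapunov solves above with quantities estimated from simulation; the proximal structure of the update is the same.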
-
Many modern large-scale and distributed optimization problems can be cast into a form in which the objective function is a sum of a smooth term and a nonsmooth regularizer. Such problems can be solved via a proximal gradient method, which generalizes standard gradient descent to a nonsmooth setup. In this paper, we leverage tools from control theory to study global convergence of proximal gradient flow algorithms. We utilize the fact that the proximal gradient algorithm can be interpreted as a variable-metric gradient method on the forward-backward envelope. This continuously differentiable function can be obtained from the augmented Lagrangian associated with the original nonsmooth problem and it enjoys a number of favorable properties. We prove that global exponential convergence can be achieved even in the absence of strong convexity. Moreover, for in-network optimization problems, we provide a distributed implementation of the gradient flow dynamics based on the proximal augmented Lagrangian and prove global exponential stability for strongly convex problems.
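For the smooth-plus-nonsmooth composite form described above, the discrete-time proximal gradient iteration (whose continuous-time flow the paper analyzes) can be sketched on the classic lasso instance, min 0.5||Ax - b||² + λ||x||₁, where the proximal operator of the ℓ1 term is elementwise soft-thresholding. This is a standard illustration of the method, not the paper's distributed flow dynamics; the problem instance and parameters are assumptions.

```python
import numpy as np

def prox_gradient_lasso(A, b, lam, step=None, iters=300):
    """Proximal gradient (ISTA) for min 0.5 * ||A x - b||^2 + lam * ||x||_1."""
    if step is None:
        # 1/L, where L = ||A||_2^2 is the Lipschitz constant of the smooth gradient
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - step * A.T @ (A @ x - b)            # gradient step on the smooth term
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # prox of lam*||.||_1
    return x
```

The forward-backward envelope referenced in the abstract is a smooth surrogate whose gradient flow reproduces (a variable-metric version of) exactly this forward (gradient) / backward (prox) splitting.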
-
We study the problem of sparse interaction topology identification using the sample covariance matrix of the states of the network. We postulate that the statistics are generated by a stochastically-forced undirected consensus network with unknown topology in which some of the nodes may have access to their own states. We first propose a method for topology identification using a regularized Gaussian maximum likelihood framework where the $\ell_1$ regularizer is introduced as a means for inducing sparse network topology. We also develop a method based on growing a Chow-Liu tree that is well-suited for identifying the underlying structure of large-scale systems. We apply this technique to resting-state functional MRI (fMRI) data as well as synthetic datasets to illustrate the effectiveness of the proposed approach.
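The Chow-Liu idea can be sketched for Gaussian data: pairwise mutual information reduces to I(i, j) = -0.5 log(1 - ρ²ᵢⱼ) in terms of sample correlations, and the tree is the maximum-weight spanning tree under these weights. This is only an illustration of the tree-growing step under a Gaussian assumption, not the paper's full identification method (which also includes the regularized maximum likelihood stage); the function name is an assumption.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def chow_liu_tree(X):
    """Chow-Liu tree from samples X (n_samples x n_nodes), assuming Gaussian data.

    Edge weights are pairwise Gaussian mutual informations
    I(i, j) = -0.5 * log(1 - rho_ij**2); the tree is the
    maximum-weight spanning tree under these weights."""
    R = np.corrcoef(X, rowvar=False)
    np.fill_diagonal(R, 0.0)
    MI = -0.5 * np.log(1.0 - R ** 2)
    # minimum_spanning_tree minimizes, so flip to positive "costs":
    # smallest cost <-> largest mutual information
    W = MI.max() + 1.0 - MI
    np.fill_diagonal(W, 0.0)                 # zero entries are treated as absent edges
    T = minimum_spanning_tree(W)
    rows, cols = T.nonzero()
    return sorted(tuple(sorted(e)) for e in zip(rows, cols))
```

On a simple chain of noisy integrators the recovered tree coincides with the true interaction chain, since mutual information decays with graph distance.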
